
    A Tale of Two Animats: What does it take to have goals?

    What does it take for a system, biological or not, to have goals? Here, this question is approached in the context of in silico artificial evolution. By examining the informational and causal properties of artificial organisms ('animats') controlled by small, adaptive neural networks (Markov Brains), this essay discusses necessary requirements for intrinsic information, autonomy, and meaning. The focus lies on comparing two types of Markov Brains that evolved in the same simple environment: one with purely feedforward connections between its elements, the other with an integrated set of elements that causally constrain each other. While both types of brains 'process' information about their environment and are equally fit, only the integrated one forms a causally autonomous entity above a background of external influences. This suggests that to assess whether goals are meaningful for a system itself, it is important to understand what the system is, rather than what it does.
    Comment: This article is a contribution to the FQXi 2016-2017 essay contest "Wandering Towards a Goal".

    PyPhi: A toolbox for integrated information theory

    Integrated information theory provides a mathematical framework to fully characterize the cause-effect structure of a physical system. Here, we introduce PyPhi, a Python software package that implements this framework for causal analysis and unfolds the full cause-effect structure of discrete dynamical systems of binary elements. The software allows users to easily study these structures, serves as an up-to-date reference implementation of the formalisms of integrated information theory, and has been applied in research on complexity, emergence, and certain biological questions. We first provide an overview of the main algorithm and demonstrate PyPhi's functionality in the course of analyzing an example system, and then describe details of the algorithm's design and implementation. PyPhi can be installed with Python's package manager via the command 'pip install pyphi' on Linux and macOS systems equipped with Python 3.4 or higher. PyPhi is open-source and licensed under the GPLv3; the source code is hosted on GitHub at https://github.com/wmayner/pyphi . Comprehensive and continually updated documentation is available at https://pyphi.readthedocs.io/ . The pyphi-users mailing list can be joined at https://groups.google.com/forum/#!forum/pyphi-users . A web-based graphical interface to the software is available at http://integratedinformationtheory.org/calculate.html .
    Comment: 22 pages, 4 figures, 6 pages of appendices. Supporting information "S1 Calculating Phi" can be found in the ancillary file.
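
    A minimal usage sketch of the workflow the abstract describes, in Python. The three-node transition probability matrix, connectivity matrix, and state are the illustrative example from PyPhi's own documentation; API details may differ slightly across versions:

        import numpy as np
        import pyphi

        # State-by-node TPM: row i gives each node's next state when the
        # network's current state is the binary expansion of i (little-endian).
        tpm = np.array([
            [0, 0, 0],
            [0, 0, 1],
            [1, 0, 1],
            [1, 0, 0],
            [1, 1, 0],
            [1, 1, 1],
            [1, 1, 1],
            [1, 1, 0],
        ])
        # Connectivity matrix: cm[i][j] = 1 if node i sends an edge to node j.
        cm = np.array([
            [0, 0, 1],
            [1, 0, 1],
            [1, 1, 0],
        ])
        network = pyphi.Network(tpm, cm=cm)
        state = (1, 0, 0)  # current state of the three binary elements
        subsystem = pyphi.Subsystem(network, state)
        print(pyphi.compute.phi(subsystem))  # system-level integrated information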

    A topological approach to neural complexity

    Considerable effort in modern statistical physics is devoted to the study of networked systems. One of the most important examples is the brain, which creates and continuously develops complex networks of correlated dynamics. An important quantity that captures fundamental aspects of brain network organization is the neural complexity C(X) introduced by Tononi et al. This work addresses the dependence of this measure on the topological features of a network in the case of a Gaussian stationary process. Both analytical and numerical results show that the degree of complexity has a clear and simple meaning from a topological point of view. Moreover, the analytical result yields a more straightforward algorithm for computing the complexity than the standard one.
    Comment: 6 pages, 4 figures
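
    For reference, a brute-force sketch of the Tononi-Sporns-Edelman neural complexity for a stationary Gaussian process, in Python. The function names and the toy covariance are illustrative; this enumerates all subsets, which is exponential in system size, precisely the cost that a topological shortcut like the paper's would avoid:

        import itertools
        import numpy as np

        def gaussian_integration(cov):
            # Integration I(S) = sum_i H(x_i) - H(S); for a Gaussian vector
            # the 2*pi*e terms cancel, leaving only log-determinants.
            return 0.5 * (np.sum(np.log(np.diag(cov)))
                          - np.linalg.slogdet(cov)[1])

        def neural_complexity(cov):
            # C(X) = sum_k [ (k/n) I(X) - <I(X_j^k)>_j ], where the average
            # runs over all subsets X_j^k of size k.
            n = cov.shape[0]
            total = gaussian_integration(cov)
            c = 0.0
            for k in range(1, n + 1):
                mean_i = np.mean([gaussian_integration(cov[np.ix_(s, s)])
                                  for s in itertools.combinations(range(n), k)])
                c += (k / n) * total - mean_i
            return c

        # Toy covariance: six units with uniform positive correlation.
        n = 6
        cov = np.full((n, n), 0.3) + 0.7 * np.eye(n)
        print(neural_complexity(cov))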

    Integrated Information in Discrete Dynamical Systems: Motivation and Theoretical Framework

    This paper introduces a time- and state-dependent measure of integrated information, φ, which captures the repertoire of causal states available to a system as a whole. Specifically, φ quantifies how much information is generated (uncertainty is reduced) when a system enters a particular state through causal interactions among its elements, above and beyond the information generated independently by its parts. This mathematical characterization is motivated by the observation that integrated information captures two key phenomenological properties of consciousness: (i) there is a large repertoire of conscious experiences so that, when one particular experience occurs, it generates a large amount of information by ruling out all the others; and (ii) this information is integrated, in that each experience appears as a whole that cannot be decomposed into independent parts. This paper extends previous work on stationary systems and applies integrated information to discrete networks as a function of their dynamics and causal architecture. An analysis of basic examples indicates the following: (i) φ varies depending on the state entered by a network, being higher if active and inactive elements are balanced and lower if the network is inactive or hyperactive. (ii) φ varies for systems with identical or similar surface dynamics depending on the underlying causal architecture, being low for systems that merely copy or replay activity states. (iii) φ varies as a function of network architecture. High φ values can be obtained by architectures that conjoin functional specialization with functional integration. Strictly modular and homogeneous systems cannot generate high φ because the former lack integration, whereas the latter lack information. Feedforward and lattice architectures are capable of generating high φ but are inefficient. (iv) In Hopfield networks, φ is low for attractor states and neutral states, but increases if the networks are optimized to achieve tension between local and global interactions. These basic examples appear to match well against neurobiological evidence concerning the neural substrates of consciousness. More generally, φ appears to be a useful metric to characterize the capacity of any physical system to integrate information.
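
    Schematically, in the framework's notation (a condensed sketch, with the partition notation abbreviated): the effective information generated by the system entering state x_1 is the relative entropy between the a posteriori repertoire of the preceding state X_0 and its maximum-entropy a priori repertoire, and φ is that information evaluated across the minimum information partition (MIP) into parts M^k:

        \[
          \mathrm{ei}(X_1 = x_1) \;=\; H\!\left[\, p(X_0 \mid x_1) \,\middle\|\, p^{\max}(X_0) \,\right],
          \qquad
          \varphi(x_1) \;=\; H\!\left[\, p(X_0 \mid x_1) \,\middle\|\, \prod_{k} p\!\left(M_0^{k} \mid \mu_1^{k}\right) \right]_{\mathrm{MIP}}
        \]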

    Modeling Resting-State Functional Networks When the Cortex Falls Asleep: Local and Global Changes

    The transition from wakefulness to sleep represents the most conspicuous change in behavior and the level of consciousness occurring in the healthy brain. It is accompanied by similarly conspicuous changes in neural dynamics, traditionally exemplified by the change from “desynchronized” electroencephalogram activity in wake to globally synchronized slow wave activity of early sleep. However, unit and local field recordings indicate that the transition is more gradual than it might appear: On one hand, local slow waves already appear during wake; on the other hand, slow sleep waves are only rarely global. Studies with functional magnetic resonance imaging also reveal changes in resting-state functional connectivity (FC) between wake and slow wave sleep. However, it remains unclear how resting-state networks may change during this transition period. Here, we employ large-scale modeling of the human cortico-cortical anatomical connectivity to evaluate changes in resting-state FC when the model “falls asleep” due to the progressive decrease in arousal-promoting neuromodulation. When cholinergic neuromodulation is parametrically decreased, local slow waves appear, while the overall organization of resting-state networks does not change. Furthermore, we show that these local slow waves are structured macroscopically in networks that resemble the resting-state networks. In contrast, when neuromodulation decreases further to very low levels, slow waves become global and resting-state networks merge into a single undifferentiated, broadly synchronized network.

    Propagation of first and second sound in a two-dimensional Fermi superfluid

    Sound propagation is a macroscopic manifestation of the interplay between the equilibrium thermodynamics and the dynamical transport properties of fluids. Here, for a two-dimensional system of ultracold fermions, we calculate the first and second sound velocities across the whole BCS-BEC crossover and we analyze the system response to an external perturbation. In the low-temperature regime we reproduce the recent measurements [Phys. Rev. Lett. 124, 240403 (2020)] of the first sound velocity, which, due to the decoupling of density and entropy fluctuations, is the sole mode excited by a density probe. Conversely, a heat perturbation excites only the second sound, which, being sensitive to the superfluid depletion, vanishes in the deep BCS regime, and jumps discontinuously to zero at the Berezinskii-Kosterlitz-Thouless superfluid transition. A mixing between the modes occurs only in the finite-temperature BEC regime, where our theory converges to the purely bosonic results.
    Comment: 6 pages, 3 figures; published version, correction of journal reference
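
    For orientation, when density and entropy fluctuations decouple, Landau two-fluid hydrodynamics reduces the two velocities to the textbook expressions below (a schematic sketch, not the paper's full crossover calculation; here \bar{s} and \bar{c}_v are the entropy and specific heat per unit mass, and \rho_s, \rho_n the superfluid and normal densities):

        \[
          c_1^2 = \left( \frac{\partial P}{\partial \rho} \right)_{\bar{s}},
          \qquad
          c_2^2 = \frac{\rho_s}{\rho_n} \, \frac{\bar{s}^{\,2} T}{\bar{c}_v}
        \]

    The second expression makes explicit why second sound tracks the superfluid fraction and vanishes once \rho_s drops to zero, as at the BKT transition discussed above.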

    Cognition as Embodied Morphological Computation

    Cognitive science is considered to be the study of mind (consciousness and thought) and intelligence in humans. Under such a definition, a variety of unsolved or unsolvable problems appears. This article argues for a broad understanding of cognition, based on empirical results from, among other fields, the natural sciences, self-organization, artificial intelligence and artificial life, network science, and neuroscience: an understanding that, apart from high-level mental activities in humans, includes sub-symbolic and sub-conscious processes such as emotions, and that recognizes cognition in other living beings as well as extended and distributed/social cognition. This idea of cognition as a complex multiscale phenomenon, evolved in living organisms on the basis of bodily structures that process information, links cognitivist and EEEE (embodied, embedded, enactive, extended) approaches to cognition with the idea of morphological computation (info-computational self-organisation) in cognizing agents, emerging in evolution through interactions of a (living/cognizing) agent with the environment.

    Complexity of multi-dimensional spontaneous EEG decreases during propofol induced general anaesthesia

    Emerging neural theories of consciousness suggest a correlation between a specific type of neural dynamical complexity and the level of consciousness: When awake and aware, causal interactions between brain regions are both integrated (all regions are to a certain extent connected) and differentiated (there is inhomogeneity and variety in the interactions). In support of this, recent work by Casali et al. (2013) has shown that Lempel-Ziv complexity correlates strongly with conscious level when computed on the EEG response to transcranial magnetic stimulation. Here we investigated the complexity of spontaneous high-density EEG data during propofol-induced general anaesthesia. We consider three distinct measures: (i) Lempel-Ziv complexity, which is derived from how compressible the data are; (ii) amplitude coalition entropy, which measures the variability in the constitution of the set of active channels; and (iii) the novel synchrony coalition entropy (SCE), which measures the variability in the constitution of the set of synchronous channels. After simulations on Kuramoto oscillator models, which demonstrate that these measures capture distinct ‘flavours’ of complexity, we show that there is a robustly measurable decrease in the complexity of spontaneous EEG during general anaesthesia.
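
    As a sketch of the first measure, a Python implementation of Lempel-Ziv complexity via the classic Kaspar-Schuster parsing, applied to a single binarized channel (the Hilbert-amplitude binarization and the log normalization are common conventions in this literature, not necessarily the exact pipeline of this study):

        import numpy as np
        from scipy.signal import hilbert

        def lz76_complexity(s):
            # Number of distinct phrases in the Lempel-Ziv (1976) parsing
            # of the string s (Kaspar-Schuster formulation).
            n = len(s)
            if n < 2:
                return n
            i, k, l = 0, 1, 1
            c, k_max = 1, 1
            while True:
                if s[i + k - 1] == s[l + k - 1]:
                    k += 1
                    if l + k > n:
                        c += 1
                        break
                else:
                    k_max = max(k, k_max)
                    i += 1
                    if i == l:  # no earlier copy exists: close a new phrase
                        c += 1
                        l += k_max
                        if l + 1 > n:
                            break
                        i, k, k_max = 0, 1, 1
                    else:
                        k = 1
            return c

        # Stand-in for one EEG channel; binarize around the mean
        # instantaneous amplitude of the analytic signal.
        x = np.random.default_rng(0).standard_normal(2000)
        amp = np.abs(hilbert(x))
        s = ''.join('1' if a > amp.mean() else '0' for a in amp)
        raw = lz76_complexity(s)
        print(raw, raw * np.log2(len(s)) / len(s))  # raw and normalized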